Existing methods for the crime prediction problem fail to express fine detail because they assign probability values to large regions. This paper introduces a novel architecture combining graph convolutional networks (GCNs) with multivariate Gaussian distributions to perform high-resolution forecasting applicable to any spatiotemporal data. By exploiting the flexible structure of GCNs and providing a subdivision algorithm, we tackle the sparsity problem at high resolution. We build our model with graph convolutional gated recurrent units (Graph-ConvGRU) to learn spatial, temporal, and categorical relations. At each node of the graph, we learn a multivariate probability distribution from the features extracted by the GCN. We conduct experiments on real-world and synthetic datasets, and our model achieves the best validation and test scores among the baseline models, with significant improvements. We show that our model is not only generative but also precise.
translated by Google Translate
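The per-node distribution learning described above amounts to fitting a multivariate Gaussian at every graph node. Below is a minimal NumPy sketch of the negative log-likelihood such a model would minimize; the function name and toy values are ours, not taken from the paper.

```python
import numpy as np

def mvn_nll(y, mu, cov):
    """Negative log-likelihood of observation y under N(mu, cov).

    One such distribution is learned at every graph node; a network
    head would output mu and (a factor of) cov per node.
    """
    d = y.shape[-1]
    diff = y - mu
    sol = np.linalg.solve(cov, diff)   # solve instead of inverting, for stability
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (diff @ sol + logdet + d * np.log(2 * np.pi))

# Toy check: at y == mu the NLL reduces to the normalizing constant.
mu = np.zeros(2)
cov = np.eye(2)
nll = mvn_nll(mu, mu, cov)  # log(2*pi) for the 2-D standard normal
```

In training, this loss would be summed over nodes and time steps, with the covariance parameterized (e.g., via a Cholesky factor) to keep it positive definite.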
Researchers are doing intensive work on satellite images due to the information they contain, driven by advances in computer vision algorithms and easier access to satellite imagery. Building segmentation of satellite images can serve many potential applications, such as city, agricultural, and communication network planning. However, since no dataset exists for every region, a model trained on one region must generalize to others. In this study, we trained several models on data from China, and post-processing was applied to the best model selected among them. These models are evaluated on the Chicago region of the INRIA dataset. As the results show, although state-of-the-art performance in this area has not been achieved, the results are promising. We aim to present our initial experimental results on building segmentation from satellite images in this study.
Using Structural Health Monitoring (SHM) systems with extensive sensing arrangements on every civil structure can be costly and impractical. Various concepts have been introduced to alleviate such difficulties, such as Population-based SHM (PBSHM). Nevertheless, the studies presented in the literature do not adequately address the challenge of accessing information on the different structural states (conditions) of dissimilar civil structures. This study introduces a novel framework named Structural State Translation (SST), which aims to estimate the response data of one civil structure based on information obtained from a dissimilar structure. SST can be defined as translating a state of one civil structure to another state after discovering and learning the domain-invariant representation in the source domains of a dissimilar civil structure. SST employs a Domain-Generalized Cycle-Generative (DGCG) model to learn the domain-invariant representation in acceleration datasets obtained from a numerical bridge structure in two different structural conditions. The model is then tested on three dissimilar numerical bridge models to translate their structural conditions. Evaluation of SST via Mean Magnitude-Squared Coherence (MMSC) and modal identifiers showed that the translated bridge states (synthetic states) are significantly similar to the real ones: the average MMSC values of real and translated bridge states range from 91.2% to 97.1%, the differences in natural frequencies range from 0% to 5.71%, and the Modal Assurance Criterion (MAC) values range from 0.870 to 0.998. This study is critical for data scarcity and PBSHM, as it demonstrates that it is possible to obtain data for a structure while the structure is actually in a different condition or state.
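The Modal Assurance Criterion used above to compare real and translated states has a simple closed form. A minimal NumPy sketch follows; the toy mode shape is illustrative, not drawn from the paper.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors.

    MAC = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b));
    1.0 means identical shapes up to scale, 0.0 means orthogonal.
    """
    num = np.abs(phi_a @ phi_b) ** 2
    den = (phi_a @ phi_a) * (phi_b @ phi_b)
    return num / den

shape = np.array([0.2, 0.5, 0.9, 0.5, 0.2])  # toy first bending mode
same = mac(shape, 3.0 * shape)               # scaling does not change MAC
```

MMSC plays a complementary role in the paper: it compares the signals in the frequency domain, while MAC compares the identified mode shapes.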
This paper presents the preliminary findings of a semi-supervised segmentation method for extracting roads from satellite images. Artificial neural networks and image segmentation methods are among the most successful approaches for extracting road data from satellite images. However, these models require large amounts of training data from different regions to achieve high accuracy. When this data is insufficient in quantity or quality, a standard remedy is to train deep neural networks by transferring knowledge from annotated data obtained from other sources. This study proposes a method that performs road segmentation with semi-supervised learning. A semi-supervised domain adaptation method based on pseudo-labeling and Minimum Class Confusion is proposed, and it is observed to increase performance on the targeted datasets.
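Pseudo-labeling, as used above, typically keeps only the target-domain predictions the source-trained model is confident about. A minimal sketch under assumed details (the 0.9 threshold and the toy probabilities are ours; the abstract does not specify them):

```python
import numpy as np

CONF_THRESHOLD = 0.9  # assumed cut-off; not stated in the paper

def select_pseudo_labels(probs, threshold=CONF_THRESHOLD):
    """Keep only target-domain samples whose top predicted class
    probability exceeds the threshold; the rest stay unlabeled.

    probs: (N, C) softmax outputs of the source-trained model.
    Returns (indices, labels) of the retained pseudo-labeled samples.
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = conf >= threshold
    return np.where(keep)[0], labels[keep]

probs = np.array([[0.95, 0.05],   # confident -> kept as class 0
                  [0.55, 0.45],   # ambiguous -> discarded
                  [0.08, 0.92]])  # confident -> kept as class 1
idx, lab = select_pseudo_labels(probs)
```

The retained samples would then be mixed into the labeled training set for the next round, while a Minimum Class Confusion term regularizes the remaining unlabeled predictions.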
Extracting building heights from satellite images is an active research area with applications in many fields, such as telecommunications and city planning. Many studies utilize DSMs (Digital Surface Models) generated from lidar or stereo images for this purpose. Predicting building heights from RGB images alone is challenging due to the insufficient amount of data, low data quality, variations in building types, different angles of light and shadow, and so on. In this study, we present an instance segmentation-based building height extraction method that predicts building masks together with their respective heights from a single RGB satellite image. We used satellite images with building height annotations of certain cities, together with an open-source satellite dataset, in a transfer learning approach. We achieved a bounding-box mAP of 59, a mask mAP of 52.6, and an average accuracy of 70% for the buildings in each height class of our test set.
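The bounding-box mAP quoted above rests on an IoU overlap test between predicted and ground-truth boxes. A minimal sketch, assuming corner-format `(x1, y1, x2, y2)` boxes:

```python
def box_iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2);
    the overlap test underlying detection mAP."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

iou = box_iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175
```

In mAP evaluation, a prediction counts as a true positive only when its IoU with an unmatched ground-truth box exceeds a threshold (typically 0.5 or a 0.5:0.95 sweep).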
Transfer learning methods are widely used in satellite image segmentation problems and improve performance over classical supervised learning methods. In this study, we present a semantic segmentation method that produces land cover maps by using transfer learning. We compare models trained on low-resolution images with insufficient data for the targeted region or zoom level. In order to boost performance on the target data, we experiment with models trained with unsupervised, semi-supervised, and supervised transfer learning approaches, including satellite images from public datasets and other unlabeled sources. According to the experimental results, transfer learning improves segmentation performance by 3.4% MIoU (Mean Intersection over Union) in rural regions and by 12.9% MIoU in urban regions. We observed that transfer learning is more effective when the two datasets share a comparable zoom level and are labeled with identical rules; otherwise, semi-supervised learning that treats the data as unlabeled is more effective. In addition, experiments showed that HRNet outperformed building-segmentation approaches in multi-class segmentation.
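The MIoU figures above are computed from a per-class confusion matrix over all predicted pixels. A minimal NumPy sketch; the toy two-class matrix is illustrative only.

```python
import numpy as np

def mean_iou(conf_mat):
    """Mean Intersection over Union from a class confusion matrix.

    conf_mat[i, j] = number of pixels of true class i predicted as j.
    IoU_c = TP_c / (TP_c + FP_c + FN_c), averaged over classes.
    """
    tp = np.diag(conf_mat).astype(float)
    fp = conf_mat.sum(axis=0) - tp
    fn = conf_mat.sum(axis=1) - tp
    return float(np.mean(tp / (tp + fp + fn)))

cm = np.array([[50, 10],
               [ 5, 35]])
miou = mean_iou(cm)  # (50/65 + 35/50) / 2
```

Because each class contributes equally to the mean, MIoU penalizes a model that ignores rare classes, which per-pixel accuracy would not.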
CNN-based surrogates have become prevalent in scientific applications as replacements for conventional, time-consuming physical approaches. Although these surrogates can yield satisfactory results at significantly lower computational cost when trained over small datasets, our benchmarking results show that data-loading overhead becomes the major performance bottleneck when training surrogates with large datasets. In practice, surrogates are usually trained on high-resolution scientific data, which can easily reach the terabyte scale. Several state-of-the-art data loaders have been proposed to improve loading throughput in general CNN training; however, they are sub-optimal when applied to surrogate training. In this work, we propose SOLAR, a surrogate data loader that can substantially increase loading throughput during training. It leverages our three key observations from benchmarking and contains three novel designs. Specifically, SOLAR first generates a pre-determined shuffled index list and accordingly optimizes the global access order and the buffer eviction scheme to maximize data reuse and the buffer hit rate. It then accepts a lightweight computational imbalance in exchange for avoiding a heavyweight loading-workload imbalance, speeding up overall training. Finally, it optimizes its data access pattern with HDF5 to achieve better parallel I/O throughput. Our evaluation with three scientific surrogates and 32 GPUs shows that SOLAR achieves up to a 24.4X speedup over the PyTorch data loader and a 3.52X speedup over state-of-the-art data loaders.
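The benefit of a pre-determined access order can be illustrated with a toy LRU chunk buffer: knowing the whole epoch's sample order in advance lets a loader regroup accesses so samples sharing a chunk are read back-to-back. This is a sketch under assumed parameters (chunk size, buffer capacity, and the replay logic are ours, not SOLAR's actual implementation):

```python
from collections import OrderedDict

def buffer_hits(access_order, capacity, samples_per_chunk=4):
    """Replay a sample access order against an LRU buffer holding
    `capacity` chunks and count hits (a stand-in for a planned
    eviction scheme; all sizes here are illustrative)."""
    buf, hits = OrderedDict(), 0
    for idx in access_order:
        chunk = idx // samples_per_chunk
        if chunk in buf:
            hits += 1
            buf.move_to_end(chunk)
        else:
            if len(buf) >= capacity:
                buf.popitem(last=False)  # evict least-recently-used chunk
            buf[chunk] = True
    return hits

# A worst-case order touches every chunk before revisiting any of them.
striped = [c * 4 + r for r in range(4) for c in range(16)]
# Knowing the full epoch order in advance, the loader can regroup
# accesses so samples sharing a chunk are read together.
grouped = sorted(striped)

cold_hits = buffer_hits(striped, capacity=2)  # no reuse at all
warm_hits = buffer_hits(grouped, capacity=2)  # 3 hits per 4-sample chunk
```

With 64 samples in 16 chunks, the striped order never hits the 2-chunk buffer, while the regrouped order reuses each loaded chunk three times, which is the kind of gap SOLAR's planned ordering exploits.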
Timely and effective response to humanitarian crises requires quick and accurate analysis of large amounts of text data - a process that can greatly benefit from expert-assisted NLP systems trained on validated and annotated data in the humanitarian response domain. To enable the creation of such NLP systems, we introduce and release HumSet, a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. The dataset provides documents in three languages (English, French, Spanish) and covers a variety of humanitarian crises from 2018 to 2021 across the globe. For each document, HumSet provides selected snippets (entries) as well as classes assigned to each entry, annotated using common humanitarian information analysis frameworks. HumSet also provides novel and challenging entry extraction and multi-label entry classification tasks. In this paper, we take a first step towards approaching these tasks and conduct a set of experiments with pre-trained language models (PLMs) to establish strong baselines for future research in this domain. The dataset is available at https://blog.thedeep.io/humset/.
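Multi-label entry classification, as in HumSet's second task, is commonly scored with micro-averaged F1; the metric choice and toy labels below are ours, not taken from the paper.

```python
def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 over multi-label predictions: pool true
    positives, false positives, and false negatives across all
    entries before computing precision and recall."""
    tp = sum(len(t & p) for t, p in zip(true_sets, pred_sets))
    fp = sum(len(p - t) for t, p in zip(true_sets, pred_sets))
    fn = sum(len(t - p) for t, p in zip(true_sets, pred_sets))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical entries with humanitarian-framework-style labels.
true = [{"shelter", "health"}, {"food"}]
pred = [{"shelter"}, {"food", "health"}]
score = micro_f1(true, pred)
```

Micro-averaging weights frequent labels more heavily; macro-averaged F1 would instead average per-label scores and so surface performance on rare classes.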
Correspondence search is an essential step in rigid point cloud registration algorithms. Most methods maintain a single correspondence at each step and gradually remove wrong correspondences. However, building one-to-one correspondences is very difficult, especially when matching two point clouds with many locally similar features. This paper proposes an optimization method that retains all possible correspondences for each keypoint when matching a partial point cloud to a complete point cloud. These uncertain correspondences are then gradually updated with the estimated rigid transformation by considering the matching cost. Moreover, we propose a new point feature descriptor that measures the similarity between local point cloud regions. Extensive experiments show that our method outperforms state-of-the-art (SOTA) methods even when matching different objects within the same category. Notably, our method also outperforms SOTA methods when registering real-world noisy depth images to a template shape.
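The "estimated rigid transformation" step above is classically solved in closed form from weighted correspondences via the SVD-based Kabsch method. A minimal NumPy sketch; the weight vector `w` is where soft (uncertain) correspondences would plug in, and all details here are a standard reconstruction, not the paper's code.

```python
import numpy as np

def rigid_transform(src, dst, w=None):
    """Least-squares rigid transform (R, t) mapping src -> dst for
    weighted point correspondences, via the SVD-based Kabsch method."""
    w = np.ones(len(src)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    mu_s = w @ src                       # weighted centroids
    mu_d = w @ dst
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known 90-degree rotation about z plus a translation.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(0).normal(size=(10, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

In a registration loop like the one described, each iteration would re-estimate `(R, t)` from the current soft correspondence weights and then update those weights from the residual matching costs.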
Post-traumatic stress disorder (PTSD) is a chronic, debilitating mental condition that develops in response to catastrophic life events such as military combat, sexual assault, and natural disasters. PTSD is characterized by flashbacks of past traumatic events, intrusive thoughts, nightmares, hypervigilance, and sleep disturbance, all of which affect a person's life and lead to considerable social, occupational, and interpersonal dysfunction. PTSD is diagnosed by medical professionals using self-assessment questionnaires based on the PTSD symptoms defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM). In this paper, for the first time, we collect, annotate, and prepare for public release a new video database for automatic PTSD diagnosis, called the PTSD-in-the-wild dataset. The database exhibits "natural" behavior and great variability in acquisition conditions, facial expression, lighting, focus, resolution, age, gender, race, occlusion, and background. In addition to describing the details of the dataset collection, we provide a benchmark of computer vision and machine learning based approaches for evaluating PTSD on the in-the-wild dataset. Furthermore, we propose and evaluate a deep learning based PTSD detection method, which shows very promising results. Interested researchers can download a copy of the PTSD-in-the-wild dataset from: http://www.lissi.fr/ptsd-dataset/